On Runtime Parallel Scheduling for Processor Load Balancing
Abstract
Parallel scheduling is a new approach to load balancing in which all processors cooperate to schedule work. By using global load information at compile time or at runtime, it balances the load accurately and provides high-quality load balancing. This paper presents an overview of the parallel scheduling technique and presents scheduling algorithms for tree, hypercube, and mesh networks. These algorithms can fully balance the load while maximizing locality.
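As an informal illustration of the global-load-information idea (a minimal sketch only, not the paper's tree, hypercube, or mesh algorithms), the fragment below computes each processor's balanced target from the global total and pairs surplus processors with deficit ones; the flat task-count model and all names are assumptions made for this sketch.

```python
# Hypothetical sketch of one cooperative rebalancing step: every processor
# contributes its local load to a global picture (as an all-reduce would),
# then surplus work is matched against deficit processors.

def parallel_rebalance(loads):
    """loads[p] = number of tasks currently held by processor p."""
    p = len(loads)
    total = sum(loads)                  # global load information
    avg, rem = divmod(total, p)
    # target[i]: fully balanced load; the first `rem` processors take one extra
    target = [avg + (1 if i < rem else 0) for i in range(p)]
    surplus = [(i, loads[i] - target[i]) for i in range(p) if loads[i] > target[i]]
    deficit = [(i, target[i] - loads[i]) for i in range(p) if loads[i] < target[i]]
    moves, si, di = [], 0, 0            # moves holds (src, dst, count) transfers
    while si < len(surplus) and di < len(deficit):
        src, s = surplus[si]
        dst, d = deficit[di]
        n = min(s, d)
        moves.append((src, dst, n))
        surplus[si], deficit[di] = (src, s - n), (dst, d - n)
        if surplus[si][1] == 0:
            si += 1
        if deficit[di][1] == 0:
            di += 1
    return target, moves

target, moves = parallel_rebalance([9, 1, 4, 2])
print(target)   # [4, 4, 4, 4]
print(moves)    # [(0, 1, 3), (0, 3, 2)]
```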
Related articles
Parallel Loop Scheduling Approaches for Distributed and Shared Memory Systems
In this paper, we propose different approaches for the parallel loop scheduling problem on distributed as well as shared memory systems. Specifically, we propose adaptive loop scheduling models in order to achieve load balancing, low runtime scheduling overhead, low synchronization overhead, and low communication overhead. Our models are based on an adaptive determination of the chunk size and an exploit...
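One common way to adapt the chunk size is guided self-scheduling, where each chunk is proportional to the work still remaining; the sketch below illustrates that general idea and is not necessarily one of the adaptive models proposed in the paper.

```python
# Hedged sketch of chunk-based dynamic loop scheduling (guided self-scheduling
# style): early chunks are large, which keeps runtime scheduling and
# synchronization overhead low, while later chunks shrink to balance the load.

def guided_chunks(n_iters, n_procs, min_chunk=1):
    """Yield (start, end) iteration chunks with a shrinking chunk size."""
    start = 0
    while start < n_iters:
        remaining = n_iters - start
        size = max(min_chunk, remaining // n_procs)
        yield (start, start + size)
        start += size

# Example: 100 iterations shared by 4 workers.
print(list(guided_chunks(100, 4))[:5])   # [(0, 25), (25, 43), (43, 57), (57, 67), (67, 75)]
```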
The Importance of Locality in Scheduling and Load Balancing for Multiprocessors
This paper addresses the importance of locality when migrating tasks of a parallel program between processors for load balancing in a multiprocessor. Static and preprocessing task scheduling algorithms work well for certain applications, but irregular problems often require dynamic load balancing. Many heuristics have been developed for scheduling the proper number of iterations of a parallel loop...
On Runtime Parallel Scheduling
Parallel scheduling is a new approach to load balancing in which all processors cooperate to schedule work. By using global load information at compile time or at runtime, it balances the load accurately and provides high-quality load balancing. This paper presents an overview of the parallel scheduling technique. Particular scheduling algorithms...
Runtime Incremental Parallel Scheduling (RIPS) on Distributed Memory Computers
Runtime Incremental Parallel Scheduling (RIPS) is an alternative strategy to the commonly used dynamic scheduling. In this strategy, the system's scheduling activity alternates with the underlying computation work. RIPS uses advanced parallel scheduling techniques to produce low-overhead, high-quality load balancing and to adapt to irregular applications. This paper ...
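The alternation of scheduling and computation phases can be pictured roughly as below; this is a hedged sketch under assumed names and a simplified single-process model, not the RIPS implementation.

```python
# Hypothetical sketch of "scheduling alternates with computation": processors
# work on local task queues for a quantum, then all pause and rebalance globally.
from collections import deque

def run_alternating(task_queues, quantum, rebalance):
    while any(task_queues):
        # computation phase: every processor executes up to `quantum` local tasks
        for q in task_queues:
            for _ in range(min(quantum, len(q))):
                q.popleft()()
        # scheduling phase: cooperative, global redistribution of remaining tasks
        rebalance(task_queues)

def even_rebalance(queues):
    """Trivial global rebalance: pool leftover tasks and redeal them evenly."""
    pool = [t for q in queues for t in q]
    for q in queues:
        q.clear()
    for i, t in enumerate(pool):
        queues[i % len(queues)].append(t)

# Example: 3 simulated processors with an irregular initial distribution of work.
queues = [deque([lambda: None] * n) for n in (8, 1, 1)]
run_alternating(queues, quantum=2, rebalance=even_rebalance)
print([len(q) for q in queues])   # [0, 0, 0] -- all work completed
```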
Characterization of Locality Aware Task Scheduling Mechanism
The architectural features of modern computers highlight the need for parallel programming to achieve sustained performance. This paper deals with task-based programming for modern computers. Owing to the lack of data locality, communication optimization, and task characterization support in existing task scheduling, we aim to give an overview of the characterization of locality-aware task scheduling...
Journal: IEEE Trans. Parallel Distrib. Syst.
Volume: 8
Pages: -
Published: 1997